Identifying preflare spectral features using explainable artificial intelligence

Abstract

The prediction of solar flares is of practical and scientific interest; however, many machine learning methods used for this task do not provide the physical explanations behind a model's performance. We made use of two recently developed explainable artificial intelligence techniques, gradient-weighted class activation mapping (Grad-CAM) and expected gradients (EG), to reveal the decision-making process of a high-performance neural network that has been trained to distinguish between Mg II spectra derived from flaring and nonflaring active regions, a distinction that can be applied to short-timescale flare forecasting. Both techniques generate visual explanations (heatmaps) that can be projected back onto the spectra, allowing for the identification of features that are strongly associated with precursory flare activity. We automated the search for such interpretations at the level of individual wavelengths and provide multiple examples using IRIS spectral data, finding that prediction scores in general increase before flare onset. Large rasters that cover a significant portion of the active region and coincide with small preflare brightenings in both IRIS and SDO/AIA images tend to lead to better forecasts. The models indicate that Mg II triplet emission, flows, as well as broad and highly asymmetric spectral profiles are all important for flare prediction. Additionally, we find that intensity is only weakly correlated with a spectrum's prediction score, meaning that low-intensity spectra can still be of great importance to the task; 78% of the time, the position of the network's maximum attention along the slit during the preflare phase is predictive of the location of the flare's UV emission.
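To make the heatmap idea concrete, the following is a minimal sketch of Grad-CAM applied to a 1D CNN spectrum classifier. This is not the authors' code: the network architecture, the choice of `model.features` as the explained layer, and the wavelength-grid length are illustrative assumptions.

```python
# Illustrative Grad-CAM sketch for a 1D CNN that classifies spectra as
# flaring vs. nonflaring. Architecture and sizes are assumptions.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    """Toy 1D CNN; `features` plays the role of the last conv block."""
    def __init__(self, n_wavelengths: int = 240, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

def grad_cam_1d(model: SpectrumCNN, spectrum: torch.Tensor, target_class: int):
    """Return a wavelength-resolved Grad-CAM heatmap for one spectrum.

    spectrum: tensor of shape (1, 1, n_wavelengths).
    """
    activations, gradients = [], []
    layer = model.features  # conv block whose output we explain

    # Hooks capture the activations on the forward pass and their
    # gradients on the backward pass.
    h1 = layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))

    score = model(spectrum)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()

    acts, grads = activations[0], gradients[0]     # both (1, C, L)
    weights = grads.mean(dim=2, keepdim=True)      # channel weights: pooled grads
    cam = torch.relu((weights * acts).sum(dim=1))  # (1, L); keep positive evidence
    # Upsample the coarse map back onto the wavelength grid and normalize,
    # so it can be overplotted on the spectrum as a heatmap.
    cam = torch.nn.functional.interpolate(
        cam.unsqueeze(1), size=spectrum.shape[-1], mode="linear",
        align_corners=False).squeeze()
    return cam / (cam.max() + 1e-8)

model = SpectrumCNN().eval()
spec = torch.randn(1, 1, 240)  # stand-in for one observed spectrum
heatmap = grad_cam_1d(model, spec, target_class=1)
```

Expected gradients would instead average integrated-gradients attributions over baselines drawn from a background set of spectra, yielding per-wavelength attributions directly on the input rather than on an intermediate layer.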


Related articles

Building Explainable Artificial Intelligence Systems

As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has been difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but designers have not heeded the lessons learned from work in explaining expert system behavior. These new explanation systems a...


Explainable Artificial Intelligence for Training and Tutoring

This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.


Explainable Artificial Intelligence via Bayesian Teaching

Modern machine learning methods are increasingly powerful and opaque. This opaqueness is a concern across a variety of domains in which algorithms are making important decisions that should be scrutable. The explainability of machine learning systems is therefore of increasing interest. We propose an explanation-by-examples approach that builds on our recent research in Bayesian teaching in which...


Automated Reasoning for Explainable Artificial Intelligence

Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This paper discusses the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligenc...


An Explainable Artificial Intelligence System for Small-unit Tactical Behavior

As the artificial intelligence (AI) systems in military simulations and computer games become more complex, their actions become increasingly difficult for users to understand. Expert systems for medical diagnosis have addressed this challenge through the addition of explanation generation systems that explain a system's internal processes. This paper describes the AI architecture and associated...



Journal

Journal title: Astronomy and Astrophysics

Year: 2023

ISSN: 0004-6361, 1432-0746

DOI: https://doi.org/10.1051/0004-6361/202244835